Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction
Machine learning (ML) models are vulnerable to membership inference attacks
(MIAs), which determine whether a given input is used for training the target
model. While there have been many efforts to mitigate MIAs, they often suffer
from limited privacy protection, large accuracy drop, and/or requiring
additional data that may be difficult to acquire. This work proposes a defense
technique, HAMP, that achieves both strong membership privacy and high
accuracy without requiring extra data. We observe that MIAs in their different
forms can be unified: they all exploit, through different proxies, the ML
model's overconfidence in predicting training samples. This motivates our
design to enforce less confident predictions by the model, thereby forcing it
to behave similarly on training and testing samples. HAMP
consists of a novel training framework with high-entropy soft labels and an
entropy-based regularizer to constrain the model's prediction while still
achieving high accuracy. To further reduce privacy risk, HAMP uniformly
modifies all the prediction outputs to become low-confidence outputs while
preserving the accuracy, which effectively obscures the differences between the
prediction on members and non-members. We conduct extensive evaluation on five
benchmark datasets, and show that HAMP provides consistently high accuracy and
strong membership privacy. Our comparison with seven state-of-the-art defenses
shows that HAMP achieves a better privacy-utility trade-off than those
techniques.
Comment: To appear at NDSS'2
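As a rough illustration of the abstract's two ingredients, high-entropy soft labels and an entropy-based regularizer, here is a minimal NumPy sketch. It is not the authors' implementation; the function names and the `top_mass`/`alpha` parameters are hypothetical.

```python
import numpy as np

def high_entropy_soft_label(true_class, num_classes, top_mass=0.6):
    """Soft label: most mass on the true class, the rest spread uniformly.
    Has higher entropy than a one-hot label, discouraging overconfidence."""
    label = np.full(num_classes, (1.0 - top_mass) / (num_classes - 1))
    label[true_class] = top_mass
    return label

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector (in nats)."""
    return float(-np.sum(p * np.log(p + eps)))

def hamp_style_loss(pred, soft_label, alpha=0.5):
    """Cross-entropy to the soft label minus an entropy bonus: the loss is
    low when the prediction matches the soft label AND stays uncertain."""
    ce = -float(np.sum(soft_label * np.log(pred + 1e-12)))
    return ce - alpha * entropy(pred)
```

Under this loss, an overconfident near-one-hot prediction on a training sample is penalized more than a hedged prediction that matches the soft label, which is the intuition behind forcing similar behavior on members and non-members.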
Hybrid DOM-Sensitive Change Impact Analysis for JavaScript
JavaScript has grown to be among the most popular programming languages. However, performing change impact analysis on JavaScript applications is challenging due to features such as the seamless interplay with the DOM, event-driven and dynamic function calls, and asynchronous client/server communication. We first perform an empirical study of change propagation, the results of which show that the DOM-related and dynamic features of JavaScript need to be taken into consideration in the analysis since they affect change impact propagation. We propose a DOM-sensitive hybrid change impact analysis technique for JavaScript through a combination of static and dynamic analysis. The proposed approach incorporates a novel ranking algorithm for indicating the importance of each entity in the impact set. Our approach is implemented in a tool called Tochal. The results of our evaluation reveal that Tochal provides a more complete analysis compared to static or dynamic methods. Moreover, through an industrial controlled experiment, we find that Tochal helps developers by improving their task completion time by 78% and their accuracy by 223%.
Replay-based Recovery for Autonomous Robotic Vehicles from Sensor Deception Attacks
Sensors are crucial for autonomous operation in robotic vehicles (RV).
Physical attacks on sensors such as sensor tampering or spoofing can feed
erroneous values to RVs through physical channels, which results in mission
failures. In this paper, we present DeLorean, a comprehensive diagnosis and
recovery framework for securing autonomous RVs from physical attacks. We
consider a strong form of physical attack called sensor deception attacks
(SDAs), in which the adversary targets multiple sensors of different types
simultaneously (even including all sensors). Under SDAs, DeLorean inspects the
attack-induced errors, identifies the targeted sensors, and prevents the
erroneous sensor inputs from being used in the RV's feedback control loop. DeLorean
replays historic state information in the feedback control loop and recovers
the RV from attacks. Our evaluation on four real and two simulated RVs shows
that DeLorean can recover RVs from different attacks, and ensure mission
success in 94% of the cases (on average), without any crashes. DeLorean incurs
low performance, memory, and battery overheads.
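The replay idea described above can be illustrated with a minimal, hypothetical control-loop wrapper. This is a sketch only; DeLorean's actual diagnosis and recovery pipeline is far more involved, and the class and its behavior are invented for illustration.

```python
from collections import deque

class ReplayRecovery:
    """Keep a rolling window of trusted state estimates; when sensors are
    flagged as under attack, feed historic states back into the control
    loop instead of the corrupted readings."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # recent trusted states
        self.replay_idx = 0                  # position within the replay

    def step(self, reading, attacked):
        if not attacked:
            # Trusted reading: record it and pass it through unchanged.
            self.history.append(reading)
            self.replay_idx = 0
            return reading
        # Under attack: replay the buffered trusted states in order,
        # holding the last one once the buffer is exhausted.
        if not self.history:
            raise RuntimeError("no trusted history to replay")
        state = list(self.history)[min(self.replay_idx, len(self.history) - 1)]
        self.replay_idx += 1
        return state
```

The design choice worth noting is that the corrupted reading never reaches the controller; the loop keeps running on historic state until the attack subsides.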
A Low-cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks
We present a highly compact run-time monitoring approach for deep computer
vision networks that extracts selected knowledge from only a few (down to
merely two) hidden layers, yet can efficiently detect silent data corruption
originating from both hardware memory and input faults. Building on the insight
that critical faults typically manifest as peak or bulk shifts in the
activation distribution of the affected network layers, we use strategically
placed quantile markers to make accurate estimates about the anomaly of the
current inference as a whole. Importantly, the detector component itself is
kept algorithmically transparent to render the categorization of regular and
abnormal behavior interpretable to a human. Our technique achieves up to ~96%
precision and ~98% recall in detection. Compared to state-of-the-art anomaly
detection techniques, this approach requires minimal compute overhead (as
little as 0.3% with respect to non-supervised inference time) and contributes
to the explainability of the model.
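A minimal sketch of the quantile-marker idea, assuming markers are calibrated on fault-free activations of a monitored layer. The function names and the `slack` margin are hypothetical simplifications of the approach described above.

```python
import numpy as np

def calibrate_markers(activations, qs=(0.05, 0.5, 0.95)):
    """From fault-free layer activations (n_samples x n_units), record the
    min/max band that each quantile occupies across calibration samples."""
    per_quantile = np.quantile(activations, qs, axis=1)  # (len(qs), n_samples)
    return [(float(row.min()), float(row.max())) for row in per_quantile]

def is_anomalous(activation, markers, qs=(0.05, 0.5, 0.95), slack=0.1):
    """Flag an inference when any quantile of the current activation vector
    leaves its calibrated band (plus a small slack margin) -- capturing the
    'peak or bulk shift' symptom of critical faults."""
    current = np.quantile(activation, qs)
    for q, (lo, hi) in zip(current, markers):
        margin = slack * (hi - lo)
        if q < lo - margin or q > hi + margin:
            return True
    return False
```

Because the detector is just a handful of quantile comparisons, a human can inspect exactly which marker fired and why, which is the transparency property the abstract emphasizes.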
Automated Derivation of Application-Aware Error Detectors Using Compiler Analysis
Coordinated Science Laboratory was formerly known as Control Systems Laboratory. National Science Foundation / NSF ACI CNS-040634 and NSF CNS 05-24695. Gigascale Systems Research Center. Motorola Corp.
ThingsMigrate: Platform-Independent Migration of Stateful JavaScript IoT Applications
The Internet of Things (IoT) has gained wide popularity both in academic and industrial contexts. As IoT devices become increasingly powerful, they can run more and more complex applications written in higher-level languages, such as JavaScript. However, by their nature, IoT devices are subject to resource constraints, which require applications to be dynamically migrated between devices (and the cloud). Further, IoT applications are also becoming more stateful, and hence we need to save their state during migration transparently to the programmer.
In this paper, we present ThingsMigrate, a middleware providing VM-independent migration of stateful JavaScript applications across IoT devices. ThingsMigrate captures and reconstructs the internal JavaScript program state by instrumenting application code before run time, without modifying the underlying Virtual Machine (VM), thus providing platform and VM independence. We evaluated ThingsMigrate against standard benchmarks, over two IoT platforms and a cloud-like environment. We show that it can successfully migrate even highly CPU-intensive applications with acceptable overheads (about 30%), and that it supports multiple migrations.
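By analogy (in Python rather than JavaScript, and far simpler than ThingsMigrate's closure-aware instrumentation), the capture/reconstruct pattern the abstract describes amounts to serializing internal program state into a portable snapshot and rebuilding it on the destination device. The `Counter` class and its methods are invented for illustration.

```python
import json

class Counter:
    """A tiny stateful 'application': its internal state must survive
    migration to another device."""

    def __init__(self, start=0):
        self.value = start

    def tick(self):
        self.value += 1
        return self.value

    def capture(self):
        # Serialize internal state into a portable, VM-independent form.
        return json.dumps({"value": self.value})

    @classmethod
    def restore(cls, snapshot):
        # Reconstruct the same state on the destination device.
        return cls(start=json.loads(snapshot)["value"])
```

The hard part that this sketch elides, and that ThingsMigrate solves via code instrumentation, is capturing state that is not directly reachable, such as variables closed over by JavaScript closures.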
SymPLFIED: Symbolic Program Level Fault Injection and Error Detection Framework
Coordinated Science Laboratory was formerly known as Control Systems Laboratory. National Science Foundation / NSF-CNS-05-51665 and NSF-CNS-04-0635.
Dynamic Tracking of Information Flow Signatures for Security Checking
Coordinated Science Laboratory was formerly known as Control Systems Laboratory. National Science Foundation / NSF ACI CNS-040634 and NSF CNS 05-24695. Gigascale Systems Research Center. Motorola Corp.